Biological Imaging
Cambridge University Press (CUP)
Preprints posted in the last 7 days, ranked by how well they match Biological Imaging's content profile, based on 15 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Pore, M.; Balamurugan, K.; Atkinson, A.; Breen, D.; Mallory, P.; Cardamone, A.; McKennett, L.; Newkirk, C.; Sharan, S.; Bocik, W.; Sterneck, E.
Circulating tumor cells (CTCs), and especially CTC-clusters, are linked to poor prognosis and may reveal mechanisms of metastasis and treatment resistance. Therefore, developing unbiased methods for the functional characterization of CTCs in liquid biopsies is an urgent need. Here, we present an evaluation of multiplex imaging mass cytometry (IMC) to analyze CTCs in mice with human xenograft tumors. In a single-step process, IMC uses metal-labeled antibodies to simultaneously detect a large number of proteins/modifications within minimally manipulated small volumes of blood from the tail vein or heart. We used breast cancer cell lines and a patient-derived xenograft (PDX) to assess antibodies for cross-species interpretation. Along with manual verification, HALO-AI-based cell segmentation was used to identify CTCs and quantify markers. Despite some limitations regarding human-specificity, this technology can be used to investigate the effect of genetic and pharmacological interventions on the properties of single and cluster CTCs in tumor-bearing mice.
Adeluwoye, A. O.; Gbadegesin, M. O.; James, F. M.; Otegbade, P. S.; Alabetutu, A.
Digital pathology, coupled with advanced image recognition algorithms, represents a transformative frontier in histopathological diagnosis. This exploratory study from a sub-Saharan African laboratory investigates the application of a Convolutional Neural Network (CNN) model, specifically leveraging the VGG16 architecture with transfer learning, for automated analysis and classification of selected gastrointestinal (GIT) and liver tissue samples, incorporating both routine and specialized staining protocols. The study utilized a dataset comprising 114 samples (18 liver, 96 GIT images) derived from archival formalin-fixed paraffin-embedded tissue blocks at University College Hospital, Ibadan, Nigeria. Specialized staining techniques included Alcian Yellow for GIT mucin visualization and Masson's Trichrome for liver fibrosis assessment, alongside conventional H&E staining. Model performance was evaluated using statistical methodologies including Wilson Score confidence intervals (CI), Bayesian probability assessment, and effect size analysis. Results reveal a striking dichotomy in model performance. The GIT tissue model achieved perfect classification accuracy (100% test accuracy) with exceptional statistical significance (Z=10.0, p<0.0001), Wilson CI [96.29%, 99.99%], Cohen's h=1.571, and Bayesian probability >99.99%. Conversely, the liver tissue model demonstrated diagnostic failure (42.86% test accuracy), with Z=-1.428, p=0.9236, Wilson CI [33.59%, 52.65%], Cohen's h=-0.144, and Bayesian probability of 7.64%. This performance divergence correlates with training data availability, as the liver dataset fell far below empirically established thresholds (>100-200 samples) for reliable classification. The liver model's failure reveals limitations in transfer learning with insufficient data.
These findings underscore critical implications for AI-enhanced digital pathology, demonstrating the deployment potential of the GIT model while underscoring the need for tissue-specific model development.
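The Wilson score intervals and Cohen's h values reported above follow from standard closed-form expressions. A minimal sketch (the sample counts passed in below are hypothetical, not the study's data; the only reproduced figure is that perfect accuracy against a 50% chance baseline gives h = π/2 ≈ 1.571, matching the GIT model's reported effect size):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def cohens_h(p1, p2):
    """Cohen's h effect size for the difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Perfect accuracy vs. a 50% chance baseline: h = pi/2.
print(round(cohens_h(1.0, 0.5), 3))  # -> 1.571
# Wilson interval for a hypothetical 96-correct-out-of-100 result.
print(wilson_ci(96, 100))
```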
Sarwin, G.; Ricciuti, V.; Staartjes, V. E.; Carretta, A.; Daher, N.; Li, Z.; Regli, L.; Mazzatenta, D.; Zoli, M.; Seungjun, R.; Konukoglu, E.; Serra, C.
Background and Objectives: We report the first intraoperative deployment of a real-time machine vision system in neurosurgery, derived from our previous anatomical detection work, automatically identifying structures during endoscopic endonasal surgery. Existing systems demonstrate promising performance in offline anatomical recognition, yet so far none have been implemented during live operations. Methods: A real-time anatomy detection model was trained using the YOLOv8 architecture (Ultralytics). Following training completion in the PyTorch environment, the model was exported to ONNX format and further optimized using the NVIDIA TensorRT engine. Deployment was carried out using the NVIDIA Holoscan SDK; the system ran on an NVIDIA Clara AGX developer kit. We used the model for real-time recognition of intraoperative anatomical structures and compared it with the same video labelled manually as reference. Model performance was reported using the average precision at an intersection-over-union threshold of 0.5 (AP50). Furthermore, end-to-end delay from frame acquisition to the display of the annotated output was measured. Results: A mean AP50 of 0.56 was achieved. The model demonstrated reliable detection of the most relevant landmarks in the transsphenoidal corridor. The mean end-to-end latency of the model was 47.81 ms (median 46.57 ms). Conclusion: For the first time, we demonstrate that clinical-grade, real-time machine-vision assistance during neurosurgery is feasible and can provide continuous, automated anatomical guidance from the surgical field. This approach may enhance intraoperative orientation, reduce cognitive load, and offer a powerful tool for surgical training. These findings represent an initial step toward integrating real-time AI support into routine neurosurgical workflows.
Osman, M.; Ashwin, H.; Calder, G.; O'Toole, P.; Bakhiet, S. M.; Musa, A. M.; Kaye, P. M.; Fahal, A. H.
Mycetoma is a neglected tropical disease caused by various bacterial and fungal pathogens that has a significant health impact across a broad geographically defined "mycetoma belt" spanning South America, Africa and Asia. Histologically, mycetoma is characterised by invasive and destructive granuloma development in the skin, deep tissues and bone, leading to tissue destruction, deformities and high morbidity. The presence of macroscopic, highly compacted pathogen microcolonies, or "grains," is a key diagnostic feature, and the formation of grains supports pathogen persistence and disease chronicity. However, there is a paucity of information on immune responses in mycetoma patients and on the relative importance of phylogeny and/or grains in establishing the local immune landscape. Here, we used spatial proteomics to examine the distribution of 43 immune-related proteins in surgical biopsies from 11 patients with mycetoma of bacterial (Actinomycetoma; Actinomadura pelletierii and Streptomyces somaliensis; n=6) and fungal (Eumycetoma; Madurella mycetomatis; n=5) origin. Using mixed-effects modelling, an exploratory analysis across species and pathogen classes revealed few significant differences in immune marker expression. In contrast, and independently of pathogen class, the cellular infiltrate closest to grain boundaries had higher per-cell expression of CD66b+, ARG1, and VISTA. The preferential accumulation of CD66b+ARG1+VISTA+ cells at grain boundaries was confirmed by quantitative immunofluorescence analysis. Hence, the local tissue microenvironment surrounding the mycetoma grain represents a specialised immunosuppressive niche, with parallels to the tumour microenvironment.
Brito-Pacheco, D. A.; Giannopoulos, P.; Reyes-Aldasoro, C. C.
In this work, the impact of outliers on the performance of machine learning and deep learning models is investigated, specifically for the case of histopathological images of colorectal cancer stained with Haematoxylin and Eosin. The evaluation of the impact is done through the systematic comparison of one machine learning model (Random Forests) and one deep learning model (ResNet-18). Both models were trained with the popular NCT-CRC-HE-100K dataset and tested on the CRC-VAL-HE-7K companion set. Then, a curation process was performed by analysing the divergence of patches based on chromatic, textural and topological features of the training set and removing outliers to repeat the training with a cleaned dataset. The results showed that machine learning models can benefit more from improvements in data quality than deep learning models. Further, the results suggest that deep learning models are more robust to outliers as, through the training process, the architectures can learn features other than those previously mentioned.
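The curation step described above (flagging patches whose chromatic, textural, or topological features diverge from the rest of the training set) can be sketched as a simple per-feature z-score filter. The threshold, feature values, and the use of z-scores as the divergence measure are all illustrative assumptions, not the paper's method:

```python
import math

def flag_outliers(features, threshold=3.0):
    """Return indices of samples whose |z-score| exceeds the threshold
    on any feature column. `features` is a list of equal-length rows."""
    n = len(features)
    dims = len(features[0])
    means = [sum(row[j] for row in features) / n for j in range(dims)]
    stds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in features) / n)
            for j in range(dims)]
    flagged = []
    for i, row in enumerate(features):
        # A column with zero spread carries no outlier information.
        if any(stds[j] > 0 and abs(row[j] - means[j]) / stds[j] > threshold
               for j in range(dims)):
            flagged.append(i)
    return flagged

# Nine similar patches plus one chromatic outlier at index 9.
data = [[0.5, 1.0]] * 9 + [[5.0, 1.0]]
print(flag_outliers(data, threshold=2.0))  # -> [9]
```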
Alqaderi, H.; Kapadia, U.; Brahmbhatt, Y.; Papathanasiou, A.; Rodgers, D.; Arsenault, P.; Cardarelli, J.; Zavras, A.; Li, H.
Background: Dental caries and periodontal disease represent the most prevalent global oral health conditions, collectively affecting several billion people. The diagnostic interpretation of dental radiographs, a cornerstone of modern dentistry, is associated with considerable inter-observer variability. In routine clinical practice, clinicians are required to evaluate a high volume of radiographic images daily, a cognitively demanding task in which diagnostic fatigue, time constraints, and the inherent complexity of overlapping anatomical structures can lead to the inadvertent oversight of early-stage pathologies. Artificial intelligence (AI) offers a transformative opportunity to augment clinical decision-making by providing rapid, objective, and consistent radiographic analysis, thereby serving as a tireless adjunct capable of flagging findings that may be missed during routine human inspection. Methods: This study developed and validated a deep learning system for the automated detection of dental caries and alveolar bone loss using a dataset of 1,063 periapical and bitewing radiographs. Two separate YOLOv8s object detection models were trained and evaluated using a rigorous 5-fold cross-validation methodology. To align with the clinical use-case of a screening tool where high sensitivity is paramount, a custom image-level evaluation criterion was employed: a true positive was recorded if any predicted bounding box had a Jaccard Index (IoU) > 0 with any ground truth annotation. Model performance was systematically evaluated at confidence thresholds of 0.10 and 0.05. Results: At a confidence threshold of 0.05, the caries detection model achieved a mean precision of 84.41% (±0.72%), recall of 85.97% (±4.72%), and an F1-score of 85.13% (±2.61%). The alveolar bone loss model demonstrated exceptionally high performance, with a mean precision of 95.47% (±0.94%), recall of 98.60% (±0.49%), and an F1-score of 97.00% (±0.46%).
Conclusion: The YOLOv8-based models demonstrated high accuracy and high sensitivity for detecting dental caries and alveolar bone loss on periapical radiographs. The system shows significant potential as a reliable automated assistant for dental practitioners, helping to improve diagnostic consistency, reduce the risk of missed pathology, and ultimately enhance the standard of patient care.
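The image-level criterion described in the methods (an image counts as a true positive if any predicted box has IoU > 0 with any ground-truth annotation) can be sketched directly; the (x1, y1, x2, y2) box convention below is an assumption:

```python
def iou(a, b):
    """Intersection over union (Jaccard index) of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def image_true_positive(predictions, ground_truths, threshold=0.0):
    """Image-level TP: any predicted box overlaps any annotation (IoU > threshold)."""
    return any(iou(p, g) > threshold
               for p in predictions for g in ground_truths)

print(image_true_positive([(0, 0, 2, 2)], [(1, 1, 3, 3)]))  # -> True
```

Setting `threshold=0.0` maximizes sensitivity, matching the screening-tool rationale given in the abstract; a stricter localisation study would raise it (0.5 is the common detection benchmark).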
Wang, S.; Ayubcha, C.; Hua, Y.; Beam, A.
Background: Developing generalizable neuroimaging models is often hindered by limited labeled data which has led to an increased interest in unsupervised inverse learning. Existing approaches often neglect geometric principles and struggle with diverse pathologies. We propose a symmetry-informed inverse learning foundation model to address these shortcomings for robust and efficient anomaly detection in brain MRI. Methods: Our framework employs a reconstruction-to-embedding pipeline, trained exclusively on healthy brain MRI slices. A 2D U-Net uses a novel, symmetry-aware masking strategy to reconstruct a disorder-free slice. Difference maps are embedded into a 1024-dimensional latent space via a Beta-VAE. Anomaly scoring is performed using Mahalanobis distance. We evaluated generalization by fine-tuning on external lesion datasets, BraTS Africa (SSA), and the ADNI-derived Alzheimer disease cohort (Alz). Results: On the source metastasis (Mets) dataset, the framework achieved high performance (AB1+MSE: 99.28% accuracy, 99.79% sensitivity). Generalization to the external lesion dataset (SSA) was robust, with the Symmetry ROC configuration achieving 91.93% accuracy. Transfer to the Alzheimer dataset (Alz) was more challenging, achieving a peak accuracy of 70.54% with a high false-positive rate, suggesting difficulty in separating subtle, diffuse changes. Conclusion: The symmetry-informed inverse learning framework establishes a robust foundation model for neuroimaging, showing strong performance for focal lesions and successful generalization under domain shift. Limitations in diffuse neurodegeneration underscore the necessity for richer representations and multimodal integration to improve future foundation models.
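Mahalanobis-distance anomaly scoring against a healthy reference distribution, as used above, can be illustrated in two dimensions (the paper works in a 1024-dimensional Beta-VAE latent space; the toy mean and covariance here are assumptions):

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of a 2-D point from a distribution with the
    given mean and 2x2 covariance matrix (inverted in closed form)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    # d^2 = dx^T * Sigma^{-1} * dx
    d2 = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
          + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(d2)

# With identity covariance the score reduces to Euclidean distance.
print(mahalanobis_2d((3, 4), (0, 0), ((1, 0), (0, 1))))  # -> 5.0
```

Unlike Euclidean distance, the score discounts directions in which the healthy embeddings already vary widely, which is why it is a common choice for scoring against a fitted reference distribution.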
Issa, F.; Trad, F.; Zein, N.; Abunasser, S.; Nizamuddin, P. B.; Salameh, I.; Ayoub, H.; Al-Abbadi, B.; Al-Hiary, M.; Abou-Nouar, Z.; Al-Subeihi, O.; Al-Zubi, Y.; Al-Manaseer, A.; Al-Jaloudi, A.; Nasrallah, D.; Younes, S.; Younes, N.; Abdallah, M.; Pieri, M.; Nicolai, E.; Yassine, H. M.; Abu-Raddad, L. J.; Nasrallah, G.
Introduction: Herpes simplex virus type 1 (HSV-1) is highly prevalent worldwide, making accurate serological testing essential for both clinical diagnosis and epidemiological surveillance. Automated chemiluminescent immunoassays (CLIAs) offer operational advantages over enzyme-linked immunosorbent assays (ELISAs); however, their diagnostic performance relative to Western blot (WB) confirmation in high-prevalence settings remains insufficiently characterized. Hypothesis/Gap Statement: The comparative diagnostic accuracy of CLIA- and ELISA-based assays for HSV-1 IgG detection, when benchmarked against a WB reference standard in endemic populations, remains unclear. Aim: This study aimed to evaluate HSV-1 IgG seroprevalence and diagnostic performance of one CLIA and two ELISA platforms using Western blot as the reference method. Methodology: Four hundred archived serum samples from adult male craft and manual workers in Qatar were tested using the Mindray CL-900i CLIA, HerpeSelect ELISA, NovaLisa ELISA, and Euroimmun Western blot. Seroprevalence, diagnostic accuracy, and interassay agreement were assessed using WB as the reference standard, with equivocal and indeterminate results excluded from analysis. Results: HSV-1 IgG seroprevalence estimates were comparable across assays: HerpeSelect 72.5%, Mindray 70.5%, NovaLisa 66.3%, and Western blot 66.5%, with no statistically significant differences (all p > 0.05). The Mindray CLIA demonstrated the highest diagnostic performance (sensitivity 95.7%, specificity 88.9%, accuracy 93.4%) and strong agreement with Western blot (κ = 0.85). HerpeSelect showed substantial agreement (κ = 0.81), while NovaLisa exhibited lower specificity. Conclusion: CLIA- and ELISA-based assays produced comparable HSV-1 seroprevalence estimates in this high-prevalence population; however, diagnostic accuracy varied across platforms.
The CLIA platform demonstrated the strongest agreement with Western blot, supporting its use in high-throughput settings, while confirmatory testing remains important to minimize misclassification.
Xu, M.; Philips, R.; Singavarapu, A.; Zheng, M.; Martin, D.; Nikolin, S.; Mutz, J.; Becker, A.; Firenze, R.; Tsai, L.-H.
Background: Gamma oscillation dysfunction has been implicated in neuropsychiatric disorders. Restoring gamma oscillations via brain stimulation represents an emerging therapeutic approach. However, the strength of its clinical effects and treatment moderators remain unclear. Method: We conducted a systematic review and meta-analysis to examine the clinical effects of gamma neuromodulation in neuropsychiatric disorders. A literature search for controlled trials using gamma stimulation was performed across five databases up until April 2025. Effect sizes were calculated using Hedges' g. Separate analyses using the random-effects model examined the clinical effects in schizophrenia (SZ), major depressive disorder (MDD), bipolar disorder, and autism spectrum disorder. For SZ and MDD, subgroup analyses evaluated the effects of stimulation modality, stimulation frequency, treatment duration, and pulses per session. Result: Fifty-six studies met the inclusion criteria (N_SZ = 943, N_MDD = 916, N_BD = 175, N_ASD = 232). In SZ, gamma stimulation was associated with improvements in positive (k = 10, g = -0.60, p < 0.001), negative (k = 12, g = -0.37, p = 0.03), depressive (k = 8, g = -0.39, p < 0.001), anxious symptoms (k = 5, g = -0.59, p < 0.001), and overall cognitive function (k = 7, g = 0.55, p < 0.001). Stimulation frequency and treatment duration moderated therapeutic effects. In MDD, reductions in depressive symptoms were observed (k = 23, g = -0.34, p = 0.007). Conclusion: Gamma neuromodulation showed moderate therapeutic benefits in SZ and MDD. Substantial heterogeneity likely reflects protocol differences, highlighting the need for well-powered future trials.
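Hedges' g, the effect size used throughout the meta-analysis above, is Cohen's d scaled by a small-sample correction factor. A minimal sketch with illustrative numbers (not trial data):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: pooled-SD standardized mean difference with the
    small-sample correction J = 1 - 3 / (4*df - 1)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Two groups of 20 separated by one pooled SD: d = 1, shrunk slightly by J.
print(round(hedges_g(1.0, 1.0, 20, 0.0, 1.0, 20), 3))  # -> 0.98
```

The correction matters most for the small per-arm samples typical of stimulation trials; for large n, J approaches 1 and g converges to Cohen's d.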
Quide, Y.; Lim, T. E.; Gustin, S. M.
Background: Early-life adversity (ELA) is a risk factor for enduring pain in youth and is associated with alterations in brain morphology and function. However, it remains unclear whether ELA-related neurobiological changes contribute to the development of enduring pain in early adolescence. Methods: Using data from the Adolescent Brain Cognitive Development (ABCD) Study, we examined multimodal magnetic resonance imaging (MRI) markers in children assessed at baseline (ages 9-11 years) and at 2-year follow-up (ages 11-13 years). ELA exposure was defined at baseline to maximise temporal separation between early adversity and later enduring pain. Participants with enduring pain at follow-up (n = 322) were compared to matched pain-free controls (n = 644). Structural MRI, diffusion MRI (fractional anisotropy, mean diffusivity), and resting-state functional connectivity data were analysed. Linear models tested main effects of enduring pain, ELA, and their interaction on brain metrics, controlling for relevant covariates. Results: ELA exposure was associated with smaller caudate and nucleus accumbens volumes, and reduced surface area of the left rostral middle frontal gyrus. No significant effects of enduring pain or ELA-by-enduring pain interaction were observed across grey matter, white matter, or functional connectivity measures. Conclusions: ELA was associated with alterations in fronto-striatal regions in late childhood, but these changes were not linked to enduring pain in early adolescence. These findings suggest that ELA-related neurobiological alterations may represent early markers of vulnerability rather than concurrent correlates of enduring pain. Longitudinal follow-up is needed to determine whether these alterations contribute to later chronic pain risk.
Spann, D. J.; Hall, L. M.; Moussa-Tooks, A.; Sheffield, J. M.
Background: Negative symptoms are core features of schizophrenia that relate strongly to functional impairment, yet interventions targeting these symptoms remain largely ineffective. Emerging theoretical work highlights how environmental factors may shape and maintain negative symptoms. Although racial disparities in schizophrenia diagnosis among Black Americans are well documented and linked to racial stress and psychosis, the impact of racial stress on negative symptoms has not been examined. This study provides an initial test of a novel theory proposing that racial stress - here measured by racial discrimination - influences negative symptom severity through exacerbation of negative cognitions about the self, particularly defeatist performance beliefs (DPB). Study Design: Participants diagnosed with schizophrenia-spectrum disorder (SSD) (N = 208; 80 Black, 128 White) completed the Positive and Negative Syndrome Scale (PANSS), the Defeatist Beliefs Scale, and self-report measures of subjective racial and ethnic discrimination (Racial and Ethnic Minority Scale and General Ethnic Discrimination Scale). Relationships among variables were tested using linear regression and mediation analysis. Study Results: Black participants exhibited significantly greater total and experiential negative symptoms than White participants with no group difference in DPB. Racial discrimination explained 46% of the relationship between race and negative symptoms. Among Black participants, higher DPB were associated with greater negative symptom severity. Discrimination was positively related to both DPB and negative symptoms. DPB partially mediated the relationship between discrimination and negative symptoms. Conclusions: Findings suggest that racial stress contributes to negative symptom severity via defeatist beliefs among Black individuals, highlighting potential targets for culturally informed interventions.
Xu, J.; Parker, R. M. A.; Bowman, K.; Clayton, G. L.; Lawlor, D. A.
Background: Higher levels of sedentary behaviour, such as leisure screen time (LST), and lower levels of physical activity are associated with diseases across multiple body systems which contribute to a large global health burden. Whether these associations are causal is unclear. The primary aim of this study is to investigate the causal effects of higher LST (given greater power) and, secondarily, lower moderate-to-vigorous intensity physical activity (MVPA), on a wide range of diseases in a hypothesis-free approach. Methods: A two-sample Mendelian randomisation phenome-wide association study was conducted for the main analyses. Genetic single nucleotide polymorphisms (SNPs) were first selected as exposure genetic instruments for LST (hours of television watched per day; 117 SNPs) and MVPA (higher vs. lower; 18 SNPs) based on the genome-wide significant threshold (p < 5×10^-8) from the largest relevant genome-wide association study (GWAS). For disease outcomes, we used summary results from FinnGen GWAS, including 1,719 diseases defined by hospital discharge International Classification of Diseases (ICD) codes in 453,733 European participants. For the main analyses, we used the inverse-variance weighting method with a Bonferroni corrected p-value of p ≤ 3.47×10^-4. Sensitivity analyses included Steiger filtering, MR-Egger and weighted median analyses, and data from UK Biobank were used to explore replication. Findings: Genetically predicted higher LST was associated with increased risk of 87 (5.1% of the 1,719) diseases. Most of these diseases were in musculoskeletal and connective tissue (n=37), genitourinary (n=12) and respiratory (n=8) systems. Genetic liability to lower MVPA was associated with six diseases: three in musculoskeletal and connective tissue and genitourinary systems (with greater risk of these diseases also identified with higher LST), and three in respiratory and genitourinary systems. Sensitivity analyses largely supported the main analyses.
Results replicated in UK Biobank where data were available. Conclusions: Higher levels of sedentary behaviour, and lower levels of physical activity, causally increase the risk of diseases across multiple body systems, making them promising targets for reducing multimorbidity.
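The inverse-variance weighted (IVW) method used in the main analyses above combines per-SNP Wald ratios (outcome effect divided by exposure effect), weighting each by its inverse variance. A minimal fixed-effect sketch with made-up summary statistics (the first-order standard-error approximation is a common simplification, not necessarily the study's exact implementation):

```python
import math

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance weighted MR estimate from per-SNP
    summary statistics: Wald ratios weighted by 1 / se_ratio^2."""
    ratios = [bo / be for be, bo in zip(beta_exp, beta_out)]
    # First-order approximation to the ratio's standard error.
    ses = [so / abs(be) for be, so in zip(beta_exp, se_out)]
    weights = [1 / s**2 for s in ses]
    est = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

# Two hypothetical SNPs whose Wald ratios are both 0.5.
est, se = ivw_estimate([0.1, 0.2], [0.05, 0.10], [0.01, 0.01])
print(round(est, 3))  # -> 0.5
```

Sensitivity checks such as MR-Egger and the weighted median, mentioned in the abstract, differ only in how these same per-SNP ratios are pooled.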
Pietilainen, O.; Salonsalmi, A.; Rahkonen, O.; Lahelma, E.; Lallukka, T.
Objectives: Longer lifespans lead to more time spent in retirement, despite efforts to raise the retirement age. Therefore, it is important to study how the retirement years can be spent without diseases. This study examined socioeconomic and sociodemographic differences in healthy years spent on retirement. Methods: We followed a cohort of retired Finnish municipal employees (N=4231, average follow-up 15.4 years) on national administrative registers for major chronic diseases: cancer, coronary heart disease, cerebrovascular disease, diabetes, asthma or chronic obstructive pulmonary disease, dementia, mental disorders, and alcohol-related disorders. Median healthy years on retirement and age at first occurrence of illness (ICD-10 and ATC-based) in each combination of sex, occupational class, and age of retirement were predicted using Royston-Parmar models. Prevalence rates for each diagnostic group were calculated. Results: Most healthy years on retirement were spent by women who had worked in semi-professional jobs and retired at age 60-62 (median predicted healthy years 11.6, 95% CI 10.4-12.7). The fewest healthy years on retirement were spent by men who had worked in routine non-manual jobs and retired after age 62 (median predicted healthy years 6.5, 95% CI 4.4-9.5). Diabetes was slightly more common among lower occupational class women, and dementia among manual-working women who retired at age 60-62. Discussion: Healthy years on retirement are not enjoyed equally by women and men, nor by those who retire earlier or later. Policies aiming to increase the retirement age should consider the effects of these gaps on retirees and the equitability of those effects.
Hung, J.; Smith, A.
The global ambition to end the human immunodeficiency virus (HIV) epidemic requires understanding which system-level policy levers, enacted under the framework of Universal Health Coverage (UHC), are most effective in achieving both transmission reduction and diagnostic coverage. This study addresses an important evidence gap by quantifying the within-country association between measurable UHC policy indicators and the estimated rate of new HIV infections across nine Southeast Asian countries between 2013 and 2022. Employing a Fixed-Effects panel data methodology, the analysis controls for time-invariant national heterogeneity, ensuring reliable estimates of policy impact. We found that marginal changes in total current health expenditure (CHE) as a percentage of gross domestic product (GDP) were not statistically significantly associated with changes in HIV incidence. However, increases in the UHC Infectious Disease Service Coverage Index were statistically significantly associated with concurrent reductions in HIV incidence (p < 0.001), suggesting the efficacy of targeted service implementation as the principal driver of curbing new HIV infections. In addition, the UHC Reproductive, Maternal, Newborn, and Child Health Service Coverage Index exhibited a statistically significant positive association with changes in HIV incidence (p < 0.01), which is interpreted as a vital surveillance artefact resulting from expanded detection and reporting of previously undiagnosed HIV cases. Furthermore, out-of-pocket (OOP) health expenditure as a percentage of CHE showed a counter-intuitive negative association with changes in HIV incidence (p < 0.01), suggesting this metric primarily shows ongoing indirect cost burdens on the established patient cohort, or, alternatively, presents a diagnostic access barrier that results in lower case finding. 
These findings suggest that policymakers should prioritise investment in targeted infectious disease service efficacy over aggregate fiscal commitment and utilise integrated sexual health platforms for strengthened HIV surveillance and case identification.
Hassan, S. S.; Nordqvist-Kleppe, S.; Asinger, N.; Wang, J.; Dillner, J.; Arroyo Muhr, L. S.
Human papillomavirus (HPV) testing is the primary method for cervical cancer screening, and a negative HPV test is associated with a very low subsequent risk of invasive cancer. Nevertheless, a small number of cervical cancers are diagnosed following an HPV-negative testing result, posing challenges within HPV-based screening pathways. Using nationwide Swedish registry data of HPV testing, we identified women diagnosed with invasive cervical cancer between 2019 and 2024 and reconstructed HPV testing histories from the National Cervical Screening Registry (NKCx). The most recent HPV test prior to diagnosis was defined as the index test, and longitudinal HPV testing trajectories were classified among women with an HPV-negative index test. Of 3,000 women diagnosed with invasive cancer, 243 (8.1%) had an HPV-negative index test. These women were older at diagnosis and more frequently diagnosed at advanced stages compared with women with an HPV-positive index test. Most HPV-negative index tests (66.3%) were performed in the peri-diagnostic period (+/- 30 days). Among women with an HPV-negative index test, 52.7% (128/243) had no prior HPV testing recorded, while the remainder had consistently HPV-negative histories (33.3%, 83/243) or evidence of prior HPV positivity before the index negative test (14%, 32/243). Possible recurrent HPV positivity following an intervening negative test was rare (0.4%, 1/243). HPV-negative screening results preceding invasive cancer reflect heterogeneous screening histories and cannot be explained solely by test failure. These findings highlight the importance of reaching women earlier in screening programs and show that fluctuating HPV detectability is rare.
Xiao, M.; Girard, Q.; Pender, M.; Rabezara, J. Y.; Rahary, P.; Randrianarisoa, S.; Rasambainarivo, F.; Rasolofoniaina, O.; Soarimalala, V.; Janko, M. M.; Nunn, C. L.
Purpose: Antibiotic use (ABU) is a major driver of antimicrobial resistance (AMR), but ABU patterns are poorly understood in low-income countries where the burden of AMR is great and ABU is insufficiently regulated. Here, we report ABU from ten sites ranging from rural villages to small cities in Madagascar, a country with high AMR levels, and present results from modeling to identify factors that may be associated with ABU in this setting. Methods: We conducted surveys of 290 individuals from ten sites in the SAVA Region of northeast Madagascar to gather data on sociodemographic characteristics, agricultural and animal husbandry practices, recent antibiotic use, the antibiotics that participants recalled using in their lifetimes, and the sources of their antibiotics. Using these data, we conducted statistical analyses with a mixed-effects logistic model to determine which characteristics were associated with recent antibiotic use. Results: Nearly all respondents (N=283, 97.6%) reported ABU in their lifetimes, with amoxicillin being the most widely reported antibiotic (N=255, 90.1% of those reporting ABU). All recalled antibiotics were classified as frontline drugs except for ciprofloxacin. Most respondents who reported antibiotic use also reported obtaining antibiotics without prescriptions from local stores (N=273, 96.5%), while only 52.3% (N=148) reported obtaining antibiotics through a prescriptive route, such as from a health clinic or private doctor. Of the 127 individuals (44.9%) who reported recent ABU, men were found to be significantly less likely to have recently taken antibiotics than women. Conclusions: Our findings provide new insights into ABU in agricultural settings in low-income countries, which have historically been understudied in AMR and pharmacoepidemiologic research. Knowledge of ABU patterns supports understanding of AMR dynamics and AMR control efforts in these contexts, such as interventions on inappropriate antibiotic dispensing.
Key points:
- Antibiotic use (ABU) in Madagascar is largely unstudied despite its role in antimicrobial resistance (AMR), of which Madagascar faces a high burden.
- ABU was widespread among livestock owners in northeast Madagascar, with the majority of study participants reporting ABU in their lifetimes and most people reporting ABU also having taken antibiotics in the previous three months.
- Most respondents reported obtaining their antibiotics from non-pharmaceutical stores, indicating high levels of unregulated ABU, though more than half also reported sourcing their antibiotics through prescriptive means (like doctors and health clinics).
- Men were less likely than women to have taken antibiotics in the previous three months.
- These findings support the development of interventions to mitigate the burden of AMR in Madagascar and similar contexts while underscoring the need for more comprehensive research on the drivers and patterns of ABU.
Plain language summary: In this study, we provide basic information on antibiotic use (ABU) patterns in Madagascar, a country that experiences high levels of resistance but has been particularly understudied in AMR and pharmacological research. We surveyed 290 farmers with livestock from ten sites across northeast Madagascar about their ABU and found that nearly all study participants (N=283, 97.6%) have used antibiotics in their lifetimes, while a little under half of those who reported ABU also reported using antibiotics in the previous three months (N=127, 44.9%). The most used antibiotic was amoxicillin (N=255, 90.1%). Most people obtained their antibiotics from sources that do not require prescriptions, like general stores, indicating that most ABU is unregulated. Through modeling, we also found that men were less likely than women to have taken antibiotics in the previous three months (OR=0.50, CI 0.30-0.82).
These findings help us better understand the dynamics of ABU in low-income countries, which have historically been understudied in AMR and pharmacological research. They also support efforts to mitigate the burden of AMR by revealing ABU dynamics that may contribute to the emergence and spread of AMR, as well as identifying targets for intervention to curb inappropriate ABU.
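The sex-specific association above (men less likely than women to report recent ABU, OR=0.50) comes from the authors' mixed-effects logistic model. As a simpler illustration of how an odds ratio and its Wald confidence interval are computed, here is a minimal Python sketch based on a hypothetical 2x2 table; the counts are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald confidence interval.

    Hypothetical table layout (counts are illustrative, NOT the study's data):
                 recent ABU   no recent ABU
        men          a              b
        women        c              d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for a 2x2 table
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Illustrative counts only
or_, (lo, hi) = odds_ratio_wald(50, 100, 60, 60)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Note that the study's estimate comes from a mixed-effects model, which additionally accounts for site-level clustering and other covariates; a raw 2x2 table cannot capture those adjustments.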
Shaetonhodi, N. G.; De Vos, L.; Babalola, C.; de Voux, A.; Joseph Davey, D.; Mdingi, M.; Peters, R. P. H.; Klausner, J. D.; Medina-Marino, A.
Background: Curable sexually transmitted infections (STIs), including Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis, remain highly prevalent among pregnant women in South Africa. Despite poor diagnostic performance in pregnancy, syndromic management remains standard care. Point-of-care (POC) screening enables aetiological diagnosis and same-visit treatment but is not yet included in national guidelines. We conducted a mixed-methods process evaluation to examine determinants of antenatal POC STI screening implementation in public facilities.
Methods: This evaluation was embedded within the three-arm Philani Ndiphile randomized trial (March 2021-February 2025) across four public clinics in the Eastern Cape. Screening used a near-POC, electricity-dependent nucleic acid amplification test with a 90-minute turnaround time. Reach, Adoption, Implementation, and Maintenance were assessed using the RE-AIM framework. Quantitative indicators included uptake of screening, treatment, and follow-up attendance. Qualitative data included in-depth interviews with 20 pregnant women and five focus group discussions with 21 research staff and government healthcare workers. The Consolidated Framework for Implementation Research guided qualitative analysis. Findings were integrated using narrative weaving.
Results: Screening uptake was high (99.0%), with treatment coverage of 95.2% at baseline and 93.5% at repeat screening. Same-day treatment was lower (50.7% and 69.8%, respectively) and varied substantially by facility, reflecting operational constraints including turnaround time, patient volume, infrastructure, and electricity. Attendance was higher when screening was integrated into routine ANC. Women valued screening for infant health, while providers recognised advantages over syndromic management but highlighted workforce, resource, and maintenance constraints. Socioeconomic factors, including transport costs, hunger, and work commitments, influenced retention and waiting times.
Conclusions: Antenatal POC STI screening was acceptable and achieved high treatment coverage in a research setting. However, same-day treatment was constrained by operational requirements of the testing platform. Scale-up will require workflow integration, strengthened health system capacity, and faster diagnostics suited to routine antenatal care.
Key Messages
What is already known on this topic: Syndromic management remains standard antenatal care in many low-resource settings despite failing to capture up to 89% of infections, which remain asymptomatic. Point-of-care aetiological screening has demonstrated feasibility, acceptability, and potential clinical benefit in research settings, yet has not been widely adopted into national policy. Limited evidence exists on the health system requirements and contextual determinants influencing scale-up within routine public facilities.
What this study adds: This mixed-methods process evaluation demonstrates high uptake and treatment coverage of antenatal POC STI screening in a trial setting, while identifying facility-level, structural, and socioeconomic factors shaping same-day treatment and retention. We show that implementation success varies substantially across clinics and depends on assay characteristics, workflow integration, human resources, infrastructure reliability, and follow-up capacity.
How this study might affect research, practice or policy: These findings provide implementation-relevant evidence to inform national policy deliberations on integrating POC STI screening into antenatal care. Sustainable scale-up will require context-adapted delivery models, strengthened workforce and supply systems, faster diagnostics, and alignment with existing ANC workflows to ensure equitable and durable impact.
Heffernan, P. M.; van den Berg, H.; Yadav, R. S.; Murdock, C. C.; Rohr, J. R.
Background: Insecticides remain the cornerstone of mosquito vector control for malaria, dengue, and other mosquito-borne diseases, yet global patterns of deployment and their socioeconomic and environmental drivers are poorly characterized. Understanding where and why insecticides are used is essential for better targeting control efforts and ensuring they are effective, equitable, and efficient.
Methods: We analyzed annual country-level insecticide-use data from 122 countries (1990-2019), reported as standard spray coverage for insecticide-treated nets (ITNs), residual spraying (RS), spatial spraying (SS), and larviciding (LA). Generalized linear mixed models and hurdle models quantified associations between deployment and disease incidence, human development index (HDI), human population density, temperature, and precipitation. Models were evaluated using repeated cross-validation and applied to generate downscaled predictions of insecticide use globally at subnational administrative region level 2 (ADM2).
Findings: Insecticide deployment increased with malaria and dengue incidence, but this response was substantially stronger in higher-HDI countries, indicating that deployment depends on socioeconomic capacity as well as disease burden and leading to weaker scaling in lower-resource settings. Intervention types exhibited distinct patterns: ITN use tracked malaria burden, whereas infrastructure-intensive approaches (e.g., RS and SS) were concentrated in higher-HDI settings and increased with Aedes-borne disease incidence. Downscaled ADM2-level maps uncovered substantial within-country heterogeneity that is obscured at the national scale, highlighting regions across sub-Saharan Africa, South Asia, and parts of Latin America where predicted deployment remains low relative to disease risk.
Interpretation: Global insecticide deployment reflects not only epidemiological need but also economic and logistical capacity, creating mismatches between risk and control.
High-resolution mapping can support more equitable allocation of interventions, guide insecticide resistance stewardship, and improve strategic planning as climate and urbanization reshape mosquito-borne disease risk.
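The hurdle models mentioned in the Methods are suited to data where many country-years report zero use of a given intervention: one submodel predicts whether any deployment occurs at all, and a second predicts the amount given deployment. A minimal sketch of how such a two-part prediction combines, with made-up coefficients and covariates rather than the authors' fitted models:

```python
import math

def hurdle_expected_coverage(x, beta_zero, beta_pos):
    """Two-part (hurdle) prediction of insecticide spray coverage.

    Part 1: logistic submodel for P(any deployment | covariates x).
    Part 2: log-link submodel for expected coverage given deployment.
    Overall expectation: E[y] = P(y > 0) * E[y | y > 0].
    All coefficients here are illustrative, not estimates from the study.
    """
    eta_zero = sum(b * xi for b, xi in zip(beta_zero, x))
    p_deploy = 1 / (1 + math.exp(-eta_zero))   # hurdle: any use at all?
    eta_pos = sum(b * xi for b, xi in zip(beta_pos, x))
    mean_given_deploy = math.exp(eta_pos)      # positive part, log link
    return p_deploy * mean_given_deploy

# Hypothetical covariate vector: [intercept, scaled malaria incidence, HDI]
x = [1.0, 0.8, 0.7]
beta_zero = [-1.0, 2.0, 1.5]   # made-up logistic coefficients
beta_pos = [-2.0, 1.0, 0.5]    # made-up log-link coefficients
print(hurdle_expected_coverage(x, beta_zero, beta_pos))
```

Fitting the zero and positive parts separately is what distinguishes hurdle models from ordinary GLMs, and it lets epidemiological need drive the two parts differently, as the abstract reports.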
Maneraguha, F. K.; Cote, J.; Bourbonnais, A.; Arbour, C.; Chagnon, M.; Hatem, M.
Background: Comprehensive sexuality education (CSE) is essential to the health and well-being of young people. In the Democratic Republic of Congo (DRC), where more than 65% of the population is under the age of 25, access to interpersonal CSE remains limited owing to sociocultural and structural barriers. This exposes young people to persistent socio-sanitary vulnerabilities. In this context, mobile health apps (MHAs) constitute a promising solution, supported by the growing use of smartphones among young Congolese. However, this group's intention to use MHAs for CSE has been the subject of little research to date.
Objective: The aim of this study was to identify predictors of intention to use MHAs among young Congolese, based on the extended Unified Theory of Acceptance and Use of Technology (UTAUT2).
Methods: A predictive correlational study was conducted in eight public secondary schools in Bukavu (DRC) with a stratified random sample of 859 students. Predictors of intention to use (performance expectancy (PE), effort expectancy (EE), social influence (SI), facilitating conditions (FC), and perceived risk (PR)) and moderators (age, gender, and past MHA experience) were measured from data collected through a self-administered UTAUT questionnaire. Descriptive and multivariate analyses were run in SPSS version 28.
Results: Mean age of participants was 16.3 years (SD = 1.5). Boys made up 55.1% of the sample. Overall, 51.0% of the sample owned a smartphone; of these, 62.3% reported having easy access to mobile data and 16.2% were already using MHAs to learn about sexual health. Intention to use MHAs was positively influenced by PE (β = 0.523, p < 0.001), EE (β = 0.115, p < 0.001), and SI (β = 0.113, p < 0.001). FC (p = 0.260) and PR (p = 0.631), however, had no significant influence. Age moderated all of the relationships tested (F(1, 849-854) = 9.97-20.82; p ≤ 0.002), with more marked effects among younger participants aged 14-15 years.
The final model explained 44% of the variance, indicating good predictive power.
Conclusion: Intention to use digital CSE was explained primarily by PE, EE, and SI and moderated by age. To strengthen this intention, stakeholders will need to promote e-interventions that are pertinent, easy to use, socially valued, and tailored to young people's needs and to the local context.
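The "44% of the variance" figure refers to the model's coefficient of determination (R²). As a small illustration of how R² is computed from observed and model-predicted scores (the numbers below are toy values, not the study's data):

```python
def r_squared(observed, predicted):
    """Coefficient of determination: the share of outcome variance
    explained by the model, R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - yhat) ** 2 for y, yhat in zip(observed, predicted))
    return 1 - ss_res / ss_tot

# Toy intention scores (e.g., means on a 1-5 Likert scale), for illustration
observed = [4.0, 3.5, 2.0, 5.0, 3.0, 4.5]
predicted = [3.8, 3.2, 2.6, 4.4, 3.1, 4.0]
print(round(r_squared(observed, predicted), 2))
```

An R² of 0.44 thus means the UTAUT2 predictors jointly account for 44% of the variability in reported intention to use MHAs.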
Malingumu, E. E.; Badaga, I.; Kisendi, D. D.; Pierre Kabore, R. W.; Yeremon, O. G.; Mohamed, M. A.; He, Q.
This study evaluates the feasibility of implementing artificial intelligence (AI)-driven disease surveillance systems at Julius Nyerere International Airport (JNIA) in Tanzania, a key hub for regional and international travel. Through a mixed-methods approach combining qualitative interviews and quantitative surveys, the research assesses the infrastructure, human resource capacity, and regulatory frameworks necessary for AI integration. Findings indicate that while Port Health Officers are strongly optimistic about AI's potential to enhance disease detection, the airport faces significant barriers, including outdated infrastructure, insufficient technical resources, and a lack of trained personnel. Ethical and privacy concerns, particularly surrounding data security, also emerged as key challenges, compounded by limited public awareness and questions about the socio-cultural acceptability of AI systems. Furthermore, the study identifies gaps in national policies and inter-agency coordination that hinder the effective implementation of AI technologies. The research concludes that while current conditions render AI adoption infeasible, strategic investments in infrastructure, workforce training, and policy development could pave the way for future integration, enhancing public health surveillance at JNIA and potentially other airports in low- and middle-income countries. This study contributes critical insights into the barriers and opportunities for AI-driven disease surveillance in low-resource settings, with a specific focus on a high-priority class of transit points: international airports. It emphasizes the importance of region-specific solutions to enhance health security in East Africa and supports the broader global health agenda by advocating for international collaboration and the development of scalable disease surveillance systems.
Future research should explore pilot AI implementations at other airports to evaluate real-world challenges and refine AI systems for broader applicability, including cost-effectiveness analyses and integration of public perspectives on AI.